Supplementary Material for Neural Sparse Representation for Image Restoration
Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu
In Eq. 9, we reduce the soft sparsity constraints to a weighted sum of convolution kernels. Here we give a detailed proof of the derivation. Formally, let i denote the index of the activated group.

Figure 1: Unified network structure for image restoration (left).

In our paper, we claim that the additional complexity of our method is negligible. As shown in Figure 1, the structure is a stack of multiple residual blocks with additional convolution layers for input and output.
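The key fact behind this reduction is that convolution is linear in its input, so scaling a channel group of the hidden representation by a gate is the same as scaling the matching input-channel slices of the kernel. The following is a minimal sketch of that equivalence (my own illustration with hypothetical sizes, not the paper's code), shown for a 1x1 convolution; the same argument applies per spatial tap of a general kernel.

```python
from random import Random

rng = Random(0)
C, G, N = 6, 3, 5                  # channels, groups, spatial positions (illustrative)
gsize = C // G                     # channels per group

x = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(C)]   # hidden rep: C x N
W = [[rng.uniform(-1, 1) for _ in range(C)] for _ in range(C)]   # 1x1 kernel: out x in
s = [rng.random() for _ in range(G)]                             # per-group gates
gate = [s[c // gsize] for c in range(C)]                         # broadcast to channels

def conv1x1(W, x):
    # y[o][n] = sum_c W[o][c] * x[c][n]
    return [[sum(W[o][c] * x[c][n] for c in range(C)) for n in range(N)]
            for o in range(C)]

# Left side: gate the hidden representation, then convolve.
y1 = conv1x1(W, [[gate[c] * v for v in row] for c, row in enumerate(x)])

# Right side: fold the gates into the kernel (weighted sum over kernel groups).
y2 = conv1x1([[W[o][c] * gate[c] for c in range(C)] for o in range(C)], x)

assert all(abs(a - b) < 1e-9 for r1, r2 in zip(y1, y2) for a, b in zip(r1, r2))
```

Because the two sides are identical, the gating can be absorbed into the kernel at inference time, which is why the soft sparsity adds essentially no cost.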
Review for NeurIPS paper: Neural Sparse Representation for Image Restoration
Weaknesses: The paper is rather weak on the theoretical side of sparsity and the existing work. The paper claims in the introduction that "sparsity of hidden representation in deep neural networks cannot be solved by iterative optimization as sparse coding". I do not understand this claim, since algorithms such as LISTA do compute sparse codes from a few layers in deep networks. The fact that sparsity is needed for denoising, compression, or inverse problems is well understood independently of neural networks, and results from work carried out by many researchers such as Donoho between 1995 and 2005. I do not understand why they say that such sparsity cannot be implemented, given that a ReLU is the proximal operator of a positive l1 sparse coder, that many algorithms implement a sparse code with such architectures, and that such architectures with ReLU achieve very good performance for denoising and inverse problems, as shown by "Convolutional Neural Networks for Inverse Problems in Imaging: A Review" published in 2017; much more work has been done since.
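The reviewer's ReLU remark can be made concrete: the proximal operator of f(z) = lam*|z| plus a nonnegativity constraint is a shifted ReLU, so one LISTA-style layer is literally one proximal-gradient step of nonnegative l1 sparse coding. A minimal numerical check (my own illustration, with an illustrative brute-force grid search as the reference):

```python
def prox_nonneg_l1(x, lam):
    """Shifted ReLU: argmin over z >= 0 of 0.5*(z - x)**2 + lam*z."""
    return max(x - lam, 0.0)

def brute_force_prox(x, lam, grid=10000, lo=-5.0, hi=5.0):
    # Reference answer by exhaustive search over nonnegative candidates.
    best, best_val = 0.0, float("inf")
    for k in range(grid + 1):
        z = lo + (hi - lo) * k / grid
        if z < 0:
            continue
        val = 0.5 * (z - x) ** 2 + lam * z
        if val < best_val:
            best, best_val = z, val
    return best

# The closed form matches the brute-force minimizer up to grid resolution.
for x in (-2.0, 0.3, 1.7, 4.2):
    assert abs(prox_nonneg_l1(x, 0.5) - brute_force_prox(x, 0.5)) < 1e-2
```

With this in hand, a layer of the form relu(W @ x - lam) is exactly the prox step of an ISTA iteration, which is the substance of the reviewer's objection to the paper's claim.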
Neural Sparse Representation for Image Restoration
Inspired by the robustness and efficiency of sparse representation in sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks. Our method structurally enforces sparsity constraints upon hidden neurons. The sparsity constraints are favorable for gradient-based learning algorithms and attachable to convolution layers in various networks. Sparsity in neurons enables computation saving by only operating on non-zero components without hurting accuracy. Meanwhile, our method can magnify representation dimensionality and model capacity with negligible additional computation cost.
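The claimed computation saving follows from skipping the zeroed groups: when only one channel group of the hidden representation is non-zero, the convolution only needs the kernel slices for that group. A minimal sketch of this (my own illustration with hypothetical sizes, again using a 1x1 convolution):

```python
from random import Random

rng = Random(1)
C, G, N = 6, 3, 4                  # channels, groups, spatial positions (illustrative)
gsize = C // G

x = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(C)]
W = [[rng.uniform(-1, 1) for _ in range(C)] for _ in range(C)]

i = 1  # index of the single activated group; all other groups are zeroed
x_sparse = [[v if c // gsize == i else 0.0 for v in row]
            for c, row in enumerate(x)]

# Dense convolution over all C input channels: C*C*N multiplies.
y_full = [[sum(W[o][c] * x_sparse[c][n] for c in range(C)) for n in range(N)]
          for o in range(C)]

# Reduced convolution touching only the activated group's channels:
# (C/G)*C*N multiplies, a G-fold saving, with an identical result.
chans = range(i * gsize, (i + 1) * gsize)
y_sparse = [[sum(W[o][c] * x[c][n] for c in chans) for n in range(N)]
            for o in range(C)]

assert all(abs(a - b) < 1e-9
           for r1, r2 in zip(y_full, y_sparse) for a, b in zip(r1, r2))
```

Since the skipped channels contribute exactly zero, the reduced computation is lossless, which is the sense in which sparsity saves computation "without hurting accuracy".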